Recurrent neural networks can be trained to be maximum a posteriori probability classifiers
Authors
Abstract
This paper proves that supervised learning algorithms used to train recurrent neural networks have an equilibrium point when the network implements a Maximum A Posteriori Probability (MAP) classifier. The result holds in the limit as the size of the training set goes to infinity. The result is general, since it follows from a property of cost-minimizing algorithms, but to prove it we implicitly assume that the network being trained has enough computing power to actually implement the MAP classifier. This assumption can be satisfied using a universal dynamic system approximator. We frame our discussion in terms of Block Feedback Neural Networks (BFNs) and show that they indeed have the universal approximation property.
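For intuition, the classical argument behind results of this kind can be sketched for the static case with a squared-error cost (this is the standard derivation, not the paper's specific proof for recurrent networks). Writing the cost as an expectation over the joint input-target distribution and decomposing it,

\[
J(F) \;=\; \mathbb{E}_{x,t}\!\left[\lVert F(x) - t\rVert^2\right]
      \;=\; \mathbb{E}_{x}\!\left[\lVert F(x) - \mathbb{E}[t \mid x]\rVert^2\right] + \text{const},
\]

so, provided the network family is rich enough to represent it, the cost is minimized by \(F^*(x) = \mathbb{E}[t \mid x]\). With 1-of-M targets, \(\mathbb{E}[t_c \mid x] = P(c \mid x)\), and taking the arg max over the outputs yields exactly the MAP decision; this is where the universal-approximation assumption on the network enters.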
Similar articles
Classification of cardiac arrhythmias based on combining neural network results with Dempster-Shafer evidence theory
Cardiac arrhythmias are among the most common heart diseases and may cause a patient's death, so detecting them is extremely important. Three categories of beats, namely PAC, PVC, and normal, are considered in this paper, based on classifier fusion using evidence theory. In this study, a 250-point sample of the ECG signal is first extracted. Moreove...
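For reference, classifier fusion with Dempster-Shafer theory typically combines the belief masses \(m_1, m_2\) produced by two classifiers via Dempster's rule of combination (stated here for context; the truncated abstract does not spell out the exact fusion rule used):

\[
m_{1,2}(A) \;=\; \frac{1}{1-K}\sum_{B \,\cap\, C \,=\, A} m_1(B)\, m_2(C),
\qquad
K \;=\; \sum_{B \,\cap\, C \,=\, \emptyset} m_1(B)\, m_2(C),
\]

where \(K\) measures the conflict between the two sources.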
A Neural Network Based Approach for ESM/Radar Track Association
In this paper, a neural-network-based ESM/radar track association algorithm is presented. The algorithm consists of a feed-forward neural network and a probability combiner. The neural network classifier is trained on the radar bearing measurements and their time stamps to approximate the a posteriori probabilities. The ESM bearing measurements, along with their time stamps, are fed to ...
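The snippet does not say how the probability combiner works; one common choice, assuming the bearing measurements \(z_1,\dots,z_n\) are conditionally independent given the association hypothesis \(c\), is the recursive Bayesian product of per-measurement posteriors:

\[
P(c \mid z_1,\dots,z_n) \;\propto\; P(c) \prod_{i=1}^{n} \frac{P(c \mid z_i)}{P(c)},
\]

where each factor \(P(c \mid z_i)\) is the network's posterior estimate for measurement \(z_i\).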
Any reasonable cost function can be used for a posteriori probability approximation
In this paper, we provide a straightforward proof of an important, but nevertheless little known, result obtained by Lindley in the framework of subjective probability theory. This result, once interpreted in the machine learning/pattern recognition context, sheds new light on the probabilistic interpretation of the output of a trained classifier. A learning machine, or more generally a model, i...
Neural Network Classifiers Estimate Bayesian a posteriori Probabilities
Many neural network classifiers provide outputs which estimate Bayesian a posteriori probabilities. When the estimation is accurate, network outputs can be treated as probabilities and sum to one. Simple proofs show that Bayesian probabilities are estimated when desired network outputs are 1 of M (one output unity, all others zero) and a squared-error or cross-entropy cost function is used. Resu...
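As a minimal numerical illustration of this result (a sketch on synthetic data, not an experiment from the cited paper), a logistic-regression "network" trained with a cross-entropy cost on two Gaussian classes recovers the closed-form posterior \(P(y{=}1 \mid x) = \sigma(2x)\):

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two equally likely 1-D Gaussian classes,
# class 0 ~ N(-1, 1), class 1 ~ N(+1, 1).
n = 20000
y = rng.integers(0, 2, size=n)
x = rng.normal(loc=np.where(y == 1, 1.0, -1.0), scale=1.0)

# A two-parameter "network": sigmoid(w*x + b), trained by full-batch
# gradient descent on the cross-entropy cost with 0/1 targets.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))   # network outputs
    w -= lr * np.mean((p - y) * x)           # cross-entropy gradient wrt w
    b -= lr * np.mean(p - y)                 # cross-entropy gradient wrt b

# For this generative model the true posterior is P(y=1|x) = sigmoid(2x),
# so the trained output should approach it (w -> 2, b -> 0).
xs = np.linspace(-3.0, 3.0, 7)
print("true posterior:   ", np.round(1.0 / (1.0 + np.exp(-2.0 * xs)), 3))
print("network estimate: ", np.round(1.0 / (1.0 + np.exp(-(w * xs + b))), 3))

After training, the learned curve and the analytic posterior agree to within sampling noise, which is the sense in which "network outputs can be treated as probabilities."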
Global crustal thickness from neural network inversion of surface wave data
We present a neural network approach to invert surface wave data for a global model of crustal thickness with corresponding uncertainties. We model the a posteriori probability distribution of Moho depth as a mixture of Gaussians and let the various parameters of the mixture model be given by the outputs of a conventional neural network. We show how such a network can be trained o...
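This construction is what is usually called a mixture density network: the network maps the surface wave data \(\mathbf{x}\) to the weights, means, and variances of a Gaussian mixture over Moho depth \(d\). The general form below is the standard one and is an assumption here, since the abstract is truncated:

\[
p(d \mid \mathbf{x}) \;=\; \sum_{k=1}^{K} \pi_k(\mathbf{x})\,
\mathcal{N}\!\left(d \mid \mu_k(\mathbf{x}),\, \sigma_k^2(\mathbf{x})\right),
\qquad \sum_{k=1}^{K}\pi_k(\mathbf{x}) = 1 .
\]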
Journal: Neural Networks
Volume: 8
Pages: -
Publication year: 1995